Extended isolation forest algorithm based on random subspace
XIE Yu, JIANG Yu, LONG Chaoqi
Journal of Computer Applications    2021, 41 (6): 1679-1685.   DOI: 10.11772/j.issn.1001-9081.2020091436
Aiming at the problem of the excessive time overhead of the Extended Isolation Forest (EIF) algorithm, a new algorithm named Extended Isolation Forest based on Random Subspace (RS-EIF) was proposed. Firstly, multiple random subspaces were determined in the original data space. Then, in each random subspace, an extended isolation tree was constructed by calculating the intercept vector and slope of each node, and the extended isolation trees were integrated into a subspace extended isolation forest. Finally, the average traversal depth of each data point in the extended isolation forest was calculated to determine whether the point was anomalous. Experimental results on 9 real datasets from the Outlier Detection DataSets (ODDS) and 7 synthetic datasets with multivariate distributions show that the RS-EIF algorithm is sensitive to local anomalies and reduces the time overhead by about 60% compared with the EIF algorithm; on the ODDS datasets with many samples, its recognition accuracy is 2 to 12 percentage points higher than those of the Isolation Forest (iForest), Lightweight On-line Detection of Anomalies (LODA) and COPula-based Outlier Detection (COPOD) algorithms. The RS-EIF algorithm thus offers higher recognition efficiency on datasets with a large number of samples.
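The core of the approach can be illustrated compactly. Below is a minimal Python sketch of the idea, assuming numeric data in a NumPy array; the subspace size, tree count, sample size and the leaf depth correction are illustrative choices, not the paper's exact settings.

```python
import numpy as np

def build_tree(X, depth, max_depth, rng):
    # Leaf: too few points or depth limit reached.
    if len(X) <= 1 or depth >= max_depth:
        return {"size": len(X)}
    # Extended split: random slope (normal vector) and random intercept.
    n = rng.normal(size=X.shape[1])
    p = rng.uniform(X.min(axis=0), X.max(axis=0))
    mask = (X - p) @ n < 0
    return {"n": n, "p": p,
            "left": build_tree(X[mask], depth + 1, max_depth, rng),
            "right": build_tree(X[~mask], depth + 1, max_depth, rng)}

def path_len(x, node, depth=0):
    if "size" in node:  # leaf: add the average depth of an unbuilt subtree
        s = node["size"]
        return depth + (2 * (np.log(s - 1) + 0.5772) - 2 * (s - 1) / s if s > 1 else 0)
    side = "left" if (x - node["p"]) @ node["n"] < 0 else "right"
    return path_len(x, node[side], depth + 1)

def rs_eif_scores(X, n_subspaces=5, dim=4, n_trees=20, sample=256, seed=0):
    rng = np.random.default_rng(seed)
    depths = []
    for _ in range(n_subspaces):             # one forest per random subspace
        feats = rng.choice(X.shape[1], size=min(dim, X.shape[1]), replace=False)
        Xs = X[:, feats]
        for _ in range(n_trees):
            idx = rng.choice(len(Xs), size=min(sample, len(Xs)), replace=False)
            tree = build_tree(Xs[idx], 0, int(np.ceil(np.log2(sample))), rng)
            depths.append([path_len(x, tree) for x in Xs])
    return -np.mean(depths, axis=0)          # shallower traversal -> more anomalous
```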
Improved wavelet clustering algorithm based on peak grid
LONG Chaoqi, JIANG Yu, XIE Yu
Journal of Computer Applications    2021, 41 (4): 1122-1127.   DOI: 10.11772/j.issn.1001-9081.2020071042
Aiming at the difference between the clustering effects of the wavelet clustering algorithm under different grid division scales, an improved method based on peak grids was proposed, which mainly improved the detection of connected regions in wavelet clustering. First, the spatial grids after the wavelet transform were sorted by grid value; then, breadth-first search was used to traverse each spatial grid and detect the peak connected regions in the transformed data; finally, the connected regions were labeled and mapped back to the original data space to obtain the clustering result. Experimental results on 8 synthetic datasets (4 convex and 4 non-convex) and 2 real datasets from the UCI database show that the improved algorithm performs well at low grid division scales: compared with the original wavelet clustering algorithm, it reduces the required grid division scale by 25% to 60% and the clustering time by 14% under the same clustering effect.
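A minimal Python sketch of peak-grid connected-region detection, assuming the wavelet-transformed grid is a 2D NumPy array of cell densities; the density threshold is an illustrative parameter, not the paper's exact rule.

```python
from collections import deque
import numpy as np

def peak_connected_regions(grid, threshold=0.0):
    labels = np.full(grid.shape, -1, dtype=int)
    # Visit cells from highest to lowest value so each BFS starts at a peak.
    order = np.column_stack(
        np.unravel_index(np.argsort(grid, axis=None)[::-1], grid.shape))
    region = 0
    for r, c in order:
        if grid[r, c] <= threshold or labels[r, c] != -1:
            continue
        queue = deque([(r, c)])
        labels[r, c] = region
        while queue:                              # breadth-first flood fill
            y, x = queue.popleft()
            for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ny, nx = y + dy, x + dx
                if (0 <= ny < grid.shape[0] and 0 <= nx < grid.shape[1]
                        and labels[ny, nx] == -1 and grid[ny, nx] > threshold):
                    labels[ny, nx] = region
                    queue.append((ny, nx))
        region += 1
    return labels                                 # -1 marks background/noise cells
```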
Dynamic adaptive step-wise bitrate switching algorithm for HTTP streaming
TU Daxi, JIANG Yuhao, XU Cheng, YU Linchen
Journal of Computer Applications    2019, 39 (4): 1127-1132.   DOI: 10.11772/j.issn.1001-9081.2018091893
Aiming at the problem of low video viewing quality of experience in dynamic network environments with limited cache capacity, a Dynamic Adaptive Step-wise Bitrate Switching (DASBS) algorithm for HTTP streaming that considers both network bandwidth and cache capacity was proposed. Firstly, a sliding window was used to analyze the recently downloaded segments and obtain an initial bandwidth estimate. Then, according to the real-time bandwidth fluctuation and cache state, two correction factors were set to further smooth the bandwidth estimate. Finally, a cache threshold correlated with the current bitrate was set, and the bandwidth estimate and the dynamic cache threshold jointly controlled bitrate switching. Experimental results on the libdash platform show that DASBS outperforms the Video Quality Control for QoE (VQCQ) algorithm in switching smoothness and achieves a higher average playback bitrate, effectively improving bandwidth utilization. Although its average bitrate is slightly lower than that of the Evolution of Adaptive Bitrate Switching (EABS) algorithm, the number of switches is greatly reduced, improving switching stability. The results show that the proposed algorithm achieves high bandwidth utilization, switching smoothness and switching stability in dynamic network environments, and can effectively improve user experience.
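A minimal sketch of the step-wise decision logic, assuming per-segment download throughputs and a buffer level in seconds; the window size, smoothing corrections and thresholds below are illustrative stand-ins for the paper's tuned factors.

```python
def next_bitrate(throughputs, buffer_s, current, ladder,
                 window=5, buf_low=8.0, buf_high=20.0):
    recent = throughputs[-window:]                     # sliding window
    est = sum(recent) / len(recent)                    # initial bandwidth estimate
    # Correction 1: discount the estimate when samples are volatile.
    mean = est
    var = sum((t - mean) ** 2 for t in recent) / len(recent)
    est *= 1.0 / (1.0 + var ** 0.5 / max(mean, 1e-9))
    # Correction 2: be conservative when the buffer is draining.
    if buffer_s < buf_low:
        est *= buffer_s / buf_low
    i = ladder.index(current)
    # Step-wise moves: at most one rung per decision keeps switching smooth.
    if est > ladder[min(i + 1, len(ladder) - 1)] and buffer_s > buf_high:
        i = min(i + 1, len(ladder) - 1)
    elif est < current:
        i = max(i - 1, 0)
    return ladder[i]

# Example: ladder in kbit/s, five recent throughputs, 12 s of buffered video.
print(next_bitrate([1800, 2100, 1900, 2500, 2300], 12.0, 1500,
                   [400, 800, 1500, 3000, 6000]))
```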
Backstepping-based controller design for nonlinear pure feedback systems
JIA Fujin, JIANG Yuan
Journal of Computer Applications    2018, 38 (1): 300-304.   DOI: 10.11772/j.issn.1001-9081.2017061365
To solve the difficulty of designing a controller via existing coordinate transformations for nonlinear pure feedback systems with non-affine structure, a new coordinate transformation was proposed and a first-order auxiliary system for the control input was introduced. Firstly, a new state equation was derived by applying the new coordinate transformation. Secondly, a positive definite Lyapunov function was designed for each step of the backstepping method. Finally, the derivatives of the Lyapunov functions were made negative by designing virtual controllers and an auxiliary controller, thereby solving the tracking problem of nonlinear pure feedback systems in theory. The experimental results show that the designed auxiliary controller keeps the states of the nonlinear system globally bounded, the control output tracks the given signal, and the tracking error is asymptotically stable.
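The recursion described above follows the standard backstepping pattern, shown here as a sketch for a generic strict-feedback chain tracking a reference; the paper's transformed pure-feedback system, with its first-order input auxiliary system, follows the same steps.

```latex
% Generic two-step backstepping for \dot{x}_1 = x_2,\ \dot{x}_2 = u,
% tracking a reference y_d with gains k_1, k_2 > 0.
\begin{align*}
z_1 &= x_1 - y_d, & V_1 &= \tfrac{1}{2} z_1^2, &
\alpha_1 &= -k_1 z_1 + \dot{y}_d,\\
z_2 &= x_2 - \alpha_1, & V_2 &= V_1 + \tfrac{1}{2} z_2^2, &
u &= -k_2 z_2 - z_1 + \dot{\alpha}_1,
\end{align*}
% which gives \dot{V}_2 = -k_1 z_1^2 - k_2 z_2^2 \le 0,
% so the tracking error is asymptotically stable.
```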
Whole process optimized garbage collection for solid-state drives
FANG Caihua, LIU Jingning, TONG Wei, GAO Yang, LEI Xia, JIANG Yu
Journal of Computer Applications    2017, 37 (5): 1257-1262.   DOI: 10.11772/j.issn.1001-9081.2017.05.1257
Due to NAND flash's inherent restrictions, such as erase-before-write and a large erase unit, flash-based Solid-State Drives (SSD) require garbage collection operations to reclaim invalid physical pages. However, the high overhead caused by garbage collection significantly decreases the performance and lifetime of an SSD, and becomes even more serious when the SSD is heavily fragmented. Existing Garbage Collection (GC) algorithms only focus on some steps of the GC operation, and none of them provides a comprehensive solution that takes all the steps of the GC process into consideration. Based on a detailed analysis of the GC process, a Whole Process Optimized Garbage Collection (WPO-GC) algorithm was proposed, which integrated optimizations on each step of GC in order to reduce the negative impact on normal read/write requests and SSD lifetime to the greatest extent. Moreover, the WPO-GC was implemented on SSDsim, an open-source SSD simulator, to evaluate its efficiency. The experimental results show that, compared with a typical GC algorithm, the proposed algorithm decreases read I/O response time by 20%-40% and write I/O response time by 17%-40%, and balances wear by nearly 30% to extend the lifetime.
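One step of the GC process can be made concrete. Below is a minimal greedy pass in Python, assuming blocks are dictionaries of page states; victim selection by fewest valid pages is a common baseline for this step, not WPO-GC's whole-process optimization.

```python
GC_THRESHOLD = 0.10   # trigger GC when the free-block ratio drops below 10%

def gc_pass(blocks, free_blocks):
    if len(free_blocks) / (len(blocks) + len(free_blocks)) >= GC_THRESHOLD:
        return None                       # enough free space: skip GC entirely
    # Victim selection: the block with the fewest valid pages is cheapest to move.
    victim = min(blocks, key=lambda b: sum(p == "valid" for p in b["pages"]))
    target = free_blocks.pop()
    # Migration: copy only the valid pages, then erase the victim in one unit.
    target["pages"].extend(p for p in victim["pages"] if p == "valid")
    victim["pages"] = []
    blocks.remove(victim)
    blocks.append(target)
    free_blocks.append(victim)            # erased victim rejoins the free pool
    return victim
```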
Improved automatic classification algorithm for software bug reports in cloud environment
HUANG Wei, LIN Jie, JIANG Yu'e
Journal of Computer Applications    2016, 36 (5): 1212-1215.   DOI: 10.11772/j.issn.1001-9081.2016.05.1212
User-submitted bug reports are arbitrary and subjective, and the accuracy of their automatic classification is not ideal, so considerable human intervention is required. As bug report databases grow larger and larger, improving the accuracy of automatic classification becomes urgent. A TF-IDF (Term Frequency-Inverse Document Frequency) based Naive Bayes (NB) algorithm was proposed, which considered the relationship of a term not only across different classes but also inside a class. It was implemented in the distributed parallel environment of the MapReduce model on the Hadoop platform. The experimental results show that the proposed Naive Bayes algorithm improves the F1 measure to 71%, which is 27 percentage points higher than the state-of-the-art method, and that it can process massive data in a distributed way by adding computational nodes, offering shorter running time and better performance.
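A minimal single-machine sketch of TF-IDF weighted Naive Bayes classification with scikit-learn, on toy data; the paper's variant additionally weights terms within and across classes and runs under MapReduce on Hadoop, which this illustration omits.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Toy bug reports and labels, for illustration only.
reports = ["app crashes when saving file", "typo in settings dialog label"]
labels = ["defect", "cosmetic"]

# TF-IDF features feed a multinomial Naive Bayes classifier.
clf = make_pipeline(TfidfVectorizer(), MultinomialNB())
clf.fit(reports, labels)
print(clf.predict(["crash on startup while saving"]))
```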
Local similarity detection algorithm for time series based on distributed architecture
LIN Yang, JIANG Yu'e, LIN Jie
Journal of Computer Applications    2016, 36 (12): 3285-3291.   DOI: 10.11772/j.issn.1001-9081.2016.12.3285
The CrossMatch algorithm, based on the idea of Dynamic Time Warping (DTW), can be used to detect local similarity between time series. However, due to its high time and space complexity, it requires large amounts of computing resources and is almost inapplicable to long sequences. To solve these problems, a local similarity detection algorithm based on a distributed platform was proposed as a distributed solution for CrossMatch, resolving the shortage of computing resources in both time and space. Firstly, the series were split and distributed over several nodes. Secondly, the local similarity over each node's own sub-series was computed. Finally, the results were merged to obtain the local similarity of the whole series. The experimental results show that the accuracy of the proposed algorithm is similar to that of CrossMatch while using less time. The distributed algorithm can not only handle long time series that a single machine cannot process, but also improve the running speed by increasing the number of parallel computing nodes.
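A minimal sketch of the split-compute-merge pattern, assuming overlapping chunks and a plain DTW distance of a query window as the local similarity measure; CrossMatch's score-matrix formulation is not reproduced here.

```python
from multiprocessing import Pool
import numpy as np

def dtw(a, b):
    # Classic O(len(a)*len(b)) dynamic-time-warping distance.
    D = np.full((len(a) + 1, len(b) + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            D[i, j] = abs(a[i - 1] - b[j - 1]) + min(
                D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[-1, -1]

def local_best(args):
    # Each node scans its own chunk against the query window.
    chunk, offset, query = args
    return min((dtw(chunk[i:i + len(query)], query), offset + i)
               for i in range(len(chunk) - len(query) + 1))

def distributed_match(x, query, n_nodes=4):
    step = len(x) // n_nodes
    overlap = len(query) - 1                 # avoid missing boundary matches
    jobs = [(x[i:i + step + overlap], i, query)
            for i in range(0, len(x), step)
            if len(x[i:i + step + overlap]) >= len(query)]
    with Pool(n_nodes) as pool:
        return min(pool.map(local_best, jobs))   # merge: best (distance, position)
```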
Survivability analysis of interdependent network with incomplete information
JIANG Yuxiang, LYU Chen, YU Hongfang
Journal of Computer Applications    2015, 35 (5): 1224-1229.   DOI: 10.11772/j.issn.1001-9081.2015.05.1224
A method for analyzing the survivability of interdependent networks with incomplete information was proposed. Firstly, definitions of structure information and attack information were given, and a novel model of interdependent networks under incomplete attack information was built by treating the acquisition of attack information as unequal probability sampling, parameterized by information breadth and information accuracy, under the condition that the structure information was known. Secondly, with the help of generating functions and percolation theory, survivability analysis models of interdependent networks under random incomplete information and preferential incomplete information were derived. Finally, scale-free networks were taken as an example for further simulations. The results show that both the information breadth and the information accuracy parameters have tremendous impact on the percolation threshold of interdependent networks, with information accuracy mattering more than information breadth: knowing a small amount of high-accuracy node information yields the same survivability performance as a large amount of low-accuracy node information, and knowing a small number of the most important nodes can reduce the survivability of interdependent networks to a large extent. Even under incomplete attack information, interdependent networks have far lower survivability than single networks.
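The cascade mechanism underlying this analysis can be illustrated by simulation. Below is a minimal Python sketch using networkx, assuming a one-to-one dependency between same-labeled nodes of the two networks; the analytical generating-function model and the unequal-probability information sampling are not reproduced.

```python
import networkx as nx

def surviving_fraction(A, B, attacked):
    n = A.number_of_nodes()
    A, B = A.copy(), B.copy()
    A.remove_nodes_from(attacked)
    B.remove_nodes_from(attacked)          # dependent partners fail too
    while True:
        # Only nodes in the giant component of each network stay functional.
        ga = max(nx.connected_components(A), key=len, default=set())
        gb = max(nx.connected_components(B), key=len, default=set())
        alive = ga & gb                    # must survive in both networks
        if alive == set(A) and alive == set(B):
            return len(alive) / n          # cascade has converged
        A = A.subgraph(alive).copy()
        B = B.subgraph(alive).copy()

# Example: two scale-free networks, attack the first 100 node labels.
A = nx.barabasi_albert_graph(1000, 3)
B = nx.barabasi_albert_graph(1000, 3)
print(surviving_fraction(A, B, attacked=range(100)))
```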
One-class support vector data description based on local patch
YANG Xiaoming, HU Wenjun, LOU Jungang, JIANG Yunliang
Journal of Computer Applications    2015, 35 (4): 1026-1029.   DOI: 10.11772/j.issn.1001-9081.2015.04.1026
Because Support Vector Data Description (SVDD) fails to identify local geometric information, a new detection method, called One-Class SVDD based on Local Patch (OCSVDDLP), was proposed. First, the data was divided into many local patches. Then, each sample was reconstructed using its corresponding local patch. Finally, the decision model was obtained by training SVDD on the reconstructed data. The experimental results on an artificial dataset demonstrate that OCSVDDLP can capture not only the global geometric structure of the dataset but also its local geometric information, and the results on real-world datasets validate the effectiveness of the proposed method.
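A minimal sketch of the local-patch idea, assuming scikit-learn's OneClassSVM as a stand-in for SVDD (the two are equivalent under the RBF kernel) and a k-nearest-neighbor patch with mean reconstruction; k is an illustrative parameter.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.svm import OneClassSVM

def reconstruct_by_patch(X, k=8):
    # Replace each sample by the mean of its local patch (k neighbors),
    # injecting local geometric structure into the training data.
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
    _, idx = nn.kneighbors(X)              # idx includes the sample itself
    return X[idx].mean(axis=1)

X = np.random.default_rng(0).normal(size=(200, 2))
model = OneClassSVM(kernel="rbf", nu=0.1).fit(reconstruct_by_patch(X))
print(model.predict(X[:5]))                # +1 normal, -1 outlier
```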
New classification method based on neighborhood relation fuzzy rough set
HU Xuewei, JIANG Yun, LI Zhilei, SHEN Jian, HUA Fengliang
Journal of Computer Applications    2015, 35 (11): 3116-3121.   DOI: 10.11772/j.issn.1001-9081.2015.11.3116
Since fuzzy rough sets induced by fuzzy equivalence relations cannot accurately reflect decision problems described by numerical attributes over fuzzy concept domains, a fuzzy rough set model based on neighborhood relations, called NR-FRS, was proposed. First of all, the definitions of the model were presented. Based on the properties of NR-FRS, reasoning in the fuzzy neighborhood approximation space was then carried out, and attribute dependency in the characteristic subspace was analyzed. Finally, a feature selection algorithm based on NR-FRS was presented, which constructed feature subsets that made the fuzzy positive region greater than a given threshold, thereby removing redundant features and retaining attributes with strong classification capability. Classification experiments on UCI standard datasets using a Radial Basis Function (RBF) support vector machine as the classifier show that, compared with fast forward feature selection based on neighborhood rough sets as well as Kernel Principal Component Analysis (KPCA), the size of the feature subset obtained by the NR-FRS feature selection algorithm varies more smoothly and stably with the parameters, while the average classification accuracy increases by 5.2% in the best case and remains stable across parameters.
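A minimal sketch of the dependency-driven forward selection loop, assuming a crisp neighborhood dependency (the fraction of samples whose delta-neighborhood is label-consistent) in place of the paper's fuzzy membership version; delta and the stopping threshold are illustrative.

```python
import numpy as np

def dependency(X, y, feats, delta=0.2):
    Xs = X[:, feats]
    pos = 0
    for i in range(len(Xs)):
        nbr = np.linalg.norm(Xs - Xs[i], axis=1) <= delta  # delta-neighborhood
        pos += np.all(y[nbr] == y[i])      # consistent neighborhood -> positive region
    return pos / len(Xs)

def select_features(X, y, threshold=0.95, delta=0.2):
    chosen, remaining = [], list(range(X.shape[1]))
    while remaining:
        # Greedily add the feature that raises the dependency the most.
        best = max(remaining, key=lambda f: dependency(X, y, chosen + [f], delta))
        chosen.append(best)
        remaining.remove(best)
        if dependency(X, y, chosen, delta) >= threshold:
            break                          # positive region large enough; stop
    return chosen
```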
Face recognition via kernel-based non-negative sparse representation
BO Chunjuan, ZHANG Rubo, LIU Guanqun, JIANG Yuzhe
Journal of Computer Applications    2014, 34 (8): 2227-2230.   DOI: 10.11772/j.issn.1001-9081.2014.08.2227
A novel Kernel-based Non-negative Sparse Representation (KNSR) method was presented for face recognition. The contributions were mainly three-fold: First, non-negative constraints on the representation coefficients were introduced into Sparse Representation (SR), and a kernel function was exploited to depict the non-linear relationships among samples, based on which the corresponding objective function was proposed. Second, a multiplicative gradient descent method was proposed to solve the objective function, which can achieve the global optimum in theory. Finally, local binary features and the Hamming kernel were used to model the non-linear relationships among face samples, achieving robust face recognition. The experimental results on several challenging face databases demonstrate that the proposed algorithm has higher recognition rates than the Nearest Neighbor (NN), Support Vector Machine (SVM), Nearest Subspace (NS), SR and Collaborative Representation (CR) algorithms, reaching about 99% on both the YaleB and AR databases.
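A minimal sketch of the multiplicative update at the heart of such a method, assuming a kernel with non-negative values (RBF here, standing in for the Hamming kernel on local binary features); the iteration count is illustrative, and the final residual-based classification step is omitted.

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    d = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d)

def knsr_code(X, y, iters=200, eps=1e-12):
    # Minimize ||phi(y) - Phi(X) c||^2 subject to c >= 0.
    K = rbf_kernel(X, X)                   # Gram matrix of the dictionary
    k = rbf_kernel(X, y[None, :])[:, 0]    # kernel between dictionary and query
    c = np.full(len(X), 1.0 / len(X))
    for _ in range(iters):
        c *= k / (K @ c + eps)             # multiplicative step keeps c >= 0
    return c

# Classification would assign y to the class whose atoms reconstruct it
# with the smallest residual, computed per class from the code c.
```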
Hybrid discrete optimization algorithm based on gravity search and estimation of distribution
JIANG Yue, SHEN Dongmei, ZHAO Yan, GAO Shangce
Journal of Computer Applications    2014, 34 (7): 2074-2079.   DOI: 10.11772/j.issn.1001-9081.2014.07.2074
To address the problem that the traditional Gravitational Search Algorithm (GSA) easily falls into local minima, a hybrid algorithm based on Estimation of Distribution (ED) and gravitational search (GSEDA) was proposed. By characterizing the distribution of the current solutions found by GSA, ED generated promising solutions from the constructed probability matrix, thus guiding the search to new solution areas. GSEDA balances the exploration and exploitation of the search and therefore possesses a better capacity for escaping local optima. The experimental results on the traveling salesman problem indicate that GSEDA performs better than traditional algorithms in terms of solution quality and robustness.
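A minimal Python sketch of the estimation-of-distribution half of such a hybrid, assuming TSP tours encoded as permutations: the probability matrix counts how often city j follows city i in the current population, and sampling it generates new tours. The gravitational-search half is omitted.

```python
import numpy as np

def build_prob_matrix(tours, n_cities, smooth=0.1):
    P = np.full((n_cities, n_cities), smooth)   # smoothing avoids zero rows
    for tour in tours:                          # accumulate edge frequencies
        for i, j in zip(tour, tour[1:]):
            P[i, j] += 1.0
    return P / P.sum(axis=1, keepdims=True)

def sample_tour(P, rng):
    n = len(P)
    tour = [rng.integers(n)]
    remaining = set(range(n)) - {tour[0]}
    while remaining:
        cand = list(remaining)
        w = P[tour[-1], cand]                   # transition weights to unvisited cities
        tour.append(cand[rng.choice(len(cand), p=w / w.sum())])
        remaining.remove(tour[-1])
    return tour

rng = np.random.default_rng(0)
pop = [list(rng.permutation(10)) for _ in range(20)]  # stand-in GSA population
print(sample_tour(build_prob_matrix(pop, 10), rng))
```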
Topic clause identification method based on specific features
JIANG Yuru, SONG Rou
Journal of Computer Applications    2014, 34 (5): 1345-1349.   DOI: 10.11772/j.issn.1001-9081.2014.05.1345
When identifying the Topic Clause (TC) of a Punctuation Clause (PClause), the brute-force method of generating Candidate Topic Clauses (CTC) causes high time consumption and low accuracy in the identification system. A new CTC generation method was proposed, which used specific features such as the location of the PClause in the text, the grammatical features of the topic, and the adjacency features of the topic and its comment. The experimental results show that the improved method not only improves the efficiency of the system by reducing the number of CTCs, but also increases the accuracy of TC identification for single PClauses and PClause sequences by 0.96 and 1.31 percentage points respectively over the state of the art.
Multi-objective particle swarm optimization algorithm based on global best position adaptive selection and local search
HUANG Min, JIANG Yu, MAO An, JIANG Qi
Journal of Computer Applications    2014, 34 (4): 1074-1079.   DOI: 10.11772/j.issn.1001-9081.2014.04.1074
To deal with the problems of selecting the global best position and of low local search ability, a Multi-Objective Particle Swarm Optimization algorithm based on Global best position adaptive selection and Local search (MOPSO-GL) was proposed. When selecting guiding particles in MOPSO-GL, the Sigma method and the crowding distance of particles in the archive were used, with archive members choosing the particles in the swarm to guide, so as to improve solution diversity and swarm uniformity and let the population approach the true Pareto optimal solutions uniformly and quickly. Furthermore, an improved chaotic optimization strategy based on the skew tent map was adopted to improve the local search ability and the convergence of MOPSO-GL when its search ability weakened. The simulation results show that MOPSO-GL has better convergence and distribution.
New medical image classification approach based on hypersphere multi-class support vector data description
XIE Guocheng, JIANG Yun, CHEN Na
Journal of Computer Applications    2013, 33 (11): 3300-3304.  
Concerning the low training speed of multi-class mammography classification, a Hypersphere Multi-Class Support Vector Data Description (HSMC-SVDD) algorithm was proposed, which extended the Hypersphere One-Class SVDD (HSOC-SVDD) to a direct multi-class classifier. Gray-level co-occurrence matrix features were extracted from the mammograms, Kernel Principal Component Analysis (KPCA) was used to reduce the dimensionality, and HSMC-SVDD was then used for classification. As each category trained only one HSOC-SVDD, the training speed was higher than that of existing multi-class classifiers. The experimental results show that, compared with the combined classifier of Wei et al. (WEI L Y, YANG Y Y, NISHIKAWA R M, et al. A study on several machine-learning methods for classification of malignant and benign clustered micro-calcifications. IEEE Transactions on Medical Imaging, 2005, 24(3): 371-380), whose average training time is 40.2 seconds, the HSMC-SVDD classifier trains in 21.369 seconds with an accuracy of up to 76.6929%, and it is suitable for classification problems with many categories.
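A minimal sketch of direct multi-class classification with one hypersphere per class, assuming scikit-learn's OneClassSVM as a stand-in for each HSOC-SVDD (the two are equivalent for the RBF kernel); samples are assigned to the sphere whose decision function scores them highest.

```python
import numpy as np
from sklearn.svm import OneClassSVM

class HypersphereMultiClass:
    def fit(self, X, y):
        # Train one independent hypersphere per class: no pairwise classifiers,
        # which is what makes training fast.
        self.models = {c: OneClassSVM(kernel="rbf", nu=0.1).fit(X[y == c])
                       for c in np.unique(y)}
        return self

    def predict(self, X):
        # Assign each sample to the sphere it lies deepest inside.
        classes = list(self.models)
        scores = np.column_stack([self.models[c].decision_function(X)
                                  for c in classes])
        return np.array(classes)[scores.argmax(axis=1)]
```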
Calibration based DV-Hop algorithm with credible neighborhood distance estimation
JIANG Yusheng, CHEN Xian, LI Ping
Journal of Computer Applications    2013, 33 (11): 3016-3018.  
Concerning the poor localization precision of Distance Vector-Hop (DV-Hop), a calibration-based DV-Hop algorithm with credible neighborhood distance estimation (CDV-Hop) was proposed. It defined a new measure that estimates neighborhood distances by relating the proximity of two neighbors to their connectivity difference, thereby calculating more accurate neighborhood distances. Exploiting the unique location relationship between unknown nodes and their nearest anchor nodes, the algorithm added a calibration step that took the credible neighborhood distances as the calibration standard to correct the positions of unknown nodes. The simulation results show that the CDV-Hop algorithm works stably in different network environments. As the ratio of anchor nodes increases, the localization precision improves by 4.57% to 10.22% compared with the DV-Hop algorithm and by 3.2% to 8.93% compared with the Improved DV-Hop (IDV-Hop) algorithm.
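The credible neighborhood-distance idea (two neighbors that share more of their one-hop neighborhood are likely closer) can be sketched as follows; the Jaccard-based mapping from connectivity overlap to distance is an illustrative choice, and the rest of the pipeline (hop counting, hop-size estimation, position calibration) is omitted.

```python
def neighborhood_distance(a, b, adj, radio_range):
    na, nb = adj[a], adj[b]                 # one-hop neighbor sets
    overlap = len(na & nb) / len(na | nb)   # Jaccard similarity of coverage
    # High overlap -> nodes are close; zero overlap -> near the radio range.
    return radio_range * (1.0 - overlap)

# Toy topology: node ids mapped to their neighbor sets.
adj = {1: {2, 3, 4}, 2: {1, 3, 4, 5}, 3: {1, 2}, 4: {1, 2}, 5: {2}}
print(neighborhood_distance(1, 2, adj, radio_range=30.0))   # -> 18.0
```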
Design and implementation of electronic paper display driver software
HU Xingbo, JIANG Yuan, LIANG Hong, GUO Yuhua, FU Yonghua
Journal of Computer Applications    2013, 33 (10): 2989-2992.  
Electronic Paper Display (EPD) offers good reading comfort but suffers from a critical drawback: slow refresh, which can be mitigated by optimizing the design of the display's driver software. A tri-buffer-based architecture and its design methodology for the EPD driver software were proposed, and an e-reader integrating the EPD driver was implemented to verify the design. Compared with the traditional dual-buffer architecture, the proposed tri-buffer scheme sets an additional memory area to keep the EPD data frame. Test results show that the driver software works well in a real device without screen flicker and helps the display achieve excellent performance.
Contention management model based on relativity-detection of conflicts
CHU Caijun, HU Dasha, JIANG Yuming
Journal of Computer Applications    2013, 33 (07): 2051-2054.   DOI: 10.11772/j.issn.1001-9081.2013.07.2051
The Contention Manager (CM), which resolves conflicting transactions, plays a significant role in obstruction-free software transactional memory. A relativity-detection contention management model was put forward to solve the problem that the performance of existing contention management policies is sensitive to their workloads. The model detects and analyzes the relativity of conflicts from past decision records and takes that relativity as the basis of the current arbitration, which helps produce more favorable resolutions. Two benchmarks were tested, and the experimental results show that the model is flexible and adaptive: the detected transactions that finally commit account for up to 30% of the system throughput, and with this model the total transaction throughput is about 11% higher than that of the other reference objects.
New virtual desktop antivirus model
ZHAN Xu-sheng, GAO Yun-wei, FENG Bai-ming, JIANG Yun, YANG Peng-fei
Journal of Computer Applications    2012, 32 (12): 3445-3448.   DOI: 10.3724/SP.J.1087.2012.03445
Existing antivirus methods incur too much system overhead, consume a lot of network bandwidth and cannot detect unknown programs in time. Therefore, previous work was improved and a new virtual desktop antivirus model for virtual desktop infrastructure was presented, supporting both active and passive antivirus operations. Privileged virtual machines were used to scan for viruses, manage the trust list and transmit the signatures of each virtual machine to the others. Agents were used to analyze the signatures and characteristics of files, minimize the bytes to be uploaded and scanned, and scan programs as soon as they are loaded. The experimental results show that the model can detect viruses in real time while reducing system overhead and network bandwidth usage.
Improved CenSurE detector and a new rapid descriptor based on gradient of summed image patch
Fang CHEN, Yun-liang JIANG, Yun-xi XU
Journal of Computer Applications    2011, 31 (07): 1818-1821.   DOI: 10.3724/SP.J.1087.2011.01818
A new real-time, robust local feature detector and descriptor was proposed, applicable to computer vision tasks with high real-time demands. CenSurE has attracted wide attention for its extremely efficient computation, but because of its linear scale sampling, the filter response signal is very sparse and cannot achieve high repeatability. Therefore, the detector was modified to use logarithmic scale sampling, obtaining better performance. The new rapid descriptor is based on the Gradient of the Summed Image Patch (GSIP) and has superior performance. An extensive experimental evaluation shows that, compared with the state-of-the-art SURF descriptor, GSIP increases the distinctiveness of local image descriptors for image region matching and object recognition. Furthermore, GSIP achieves a two-fold speed increase over SURF.
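The detector modification is essentially a change of scale sampling, illustrated below with an assumed base size and growth factor; logarithmic sampling packs filters more densely at fine scales, which densifies the response signal.

```python
# Linear sampling: evenly spaced box-filter sizes (the original CenSurE style).
linear_sizes = [3 + 2 * k for k in range(7)]
# Logarithmic sampling: geometrically spaced sizes, denser at small scales.
log_sizes = sorted({round(3 * 1.26 ** k) for k in range(10)})
print(linear_sizes)   # [3, 5, 7, 9, 11, 13, 15]
print(log_sizes)      # e.g. [3, 4, 5, 6, 8, 10, 12, 15, 19, 24]
```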
Active scheduling protocol of local multi-line barrier coverage sensors
Ying-ying CAO, Jian-jiang YU, Li-cai ZHU, Jia-jun SUN, Xiao-xia WAN
Journal of Computer Applications    2011, 31 (04): 918-921.   DOI: 10.3724/SP.J.1087.2011.00918
To meet the needs of intrusion detection systems used in complex natural environments such as coastal mudflats, an improved barrier coverage model, a multi-line barrier coverage scheduling protocol named k-MLBCSP, a coverage layout algorithm and a coverage adjustment algorithm were proposed. The k-MLBCSP protocol divides the network lifetime into three phases. In the initialization phase, the coverage layout algorithm guarantees reasonable network settings. In the adjustment phase, the coverage adjustment algorithm provides an effective way for the sink and the alive sensors to further negotiate coverage layout strategies. Theoretical analysis and simulations show that, compared with LBCP and RIS, k-MLBCSP increases the sensor network's coverage probability and lifetime, and reduces the time complexity and network load.
Credit Architecture of Computational Economy for Grid Computing
MA Man-fu, WU Jian, HU Zheng-guo, CHEN Ding-jian, JIANG Yun
Journal of Computer Applications    2005, 25 (04): 940-943.   DOI: 10.3724/SP.J.1087.2005.0940
Based on the computational economy model of resource management in grid computing, a credit model was presented in which resource credit, GSP credit and GSC credit were defined. A Grid Credit Architecture (GCA) was designed to support the model, and the credit evaluation policy was discussed. Regarding implementation, the deployment of the credit module and the Extended Resource Usage Record (ERUR) were discussed in detail. Emulation experiments show that the new architecture is efficient and valuable in grid computing economy environments.
Computer audit based multi-agent system architecture
WEN Ju-feng, JIANG Yu-quan, XING Han-cheng
Journal of Computer Applications    2005, 25 (04): 923-926.   DOI: 10.3724/SP.J.1087.2005.0923
A novel computer-audit-oriented multi-agent system architecture was proposed. The functions of the three subsystems and the relevant agents, as well as the components of the auditing mobile agent and the mobile agent server in the model, were discussed in detail. The prototype can perform real-time distributed online computer audit, which shows that the architecture is feasible in practice.
Adaptive fuzzy classifier based on table-lookup learning algorithm
HUANG Zhan, JIANG Yu-ying, ZHANG Lei
Journal of Computer Applications    2005, 25 (04): 750-753.   DOI: 10.3724/SP.J.1087.2005.0750
An improved adaptive fuzzy system based on a table-lookup learning algorithm, the adaptive fuzzy classifier, was developed. For pattern recognition problems, simulation studies were made by applying the adaptive fuzzy classifier and a three-layer feed-forward BP neural network classifier to handwritten digit recognition. Compared with the BP neural network, the adaptive fuzzy classifier performs better in recognition ability, incorporates linguistic information, and is computationally simple. All of this shows the superiority and potential of adaptive fuzzy techniques in solving pattern recognition problems.
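A minimal Python sketch of table-lookup fuzzy rule learning in the Wang-Mendel style, assuming triangular membership functions on inputs normalized to [0,1]; the set count and the winner-take-all rule conflict resolution are illustrative choices.

```python
import numpy as np

def memberships(x, n_sets=5):
    centers = np.linspace(0.0, 1.0, n_sets)
    width = centers[1] - centers[0]
    return np.maximum(0.0, 1.0 - np.abs(x - centers) / width)  # triangular MFs

def learn_table(X, y, n_sets=5):
    table = {}
    for xi, yi in zip(X, y):
        degs = [memberships(v, n_sets) for v in xi]
        cell = tuple(int(d.argmax()) for d in degs)   # winning fuzzy set per dim
        strength = float(np.prod([d.max() for d in degs]))
        if cell not in table or strength > table[cell][1]:
            table[cell] = (yi, strength)              # keep the strongest rule
    return table

def classify(x, table, n_sets=5):
    cell = tuple(int(memberships(v, n_sets).argmax()) for v in x)
    rule = table.get(cell)
    return rule[0] if rule else None                  # None: no covering rule
```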